The ultimatum game is a popular experimental economics game in which two players interact to decide how to divide a sum of money, first described by Nobel laureate John Harsanyi in 1961. The first player, the proposer, proposes a division of the sum to the second player, the responder. The responder can either accept the proposed division or reject it. If the responder accepts, the money is split according to the proposal; if the responder rejects, neither player receives anything. Both players know the rules of the game in advance.
The game is typically designed as a one-shot interaction to isolate immediate reactions to fairness, thereby minimizing the influence of potential future interactions. Even within this one-shot context, however, participants' decision-making may implicitly take into account the consequences they would face in a repeated game, because humans have evolved within societies that interact repeatedly. The one-shot design is nonetheless intended to elicit responses to the proposed division that are as unconfounded as possible.
A Nash equilibrium is a pair of strategies (one for the proposer and one for the responder, in this case) from which neither party can improve their reward by unilaterally changing strategy. If the proposer always makes an unfair offer, the responder does best by always accepting, and the proposer maximizes their reward. Although it always benefits the responder to accept even unfair offers, the responder can instead adopt a strategy that rejects unfair splits often enough to induce the proposer to always make a fair offer; any change in strategy by the proposer then lowers the proposer's reward, and any change in strategy by the responder yields the same reward or less. Thus, there are two sets of Nash equilibria for this game: one in which the proposer always makes an unfair offer and the responder always accepts, and one in which the proposer always makes a fair offer and the responder rejects unfair offers often enough that making fair offers is at least as profitable for the proposer.
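These two equilibria can be checked directly in a small discretized version of the game. The sketch below is an illustration added here (not part of the original analysis): the pie is 10 units, the proposer's strategy is the amount offered, and the responder's strategy is a minimum acceptable offer, i.e. the deterministic special case of "rejecting unfair splits often enough".

```python
# Illustrative sketch: verifying Nash equilibria in a discretized ultimatum game.
# Pie of 10 units; proposer strategy = amount offered; responder strategy =
# minimum acceptable offer (a deterministic rejection threshold).

PIE = 10

def payoffs(offer, threshold):
    """Return (proposer, responder) payoffs for a strategy profile."""
    if offer >= threshold:          # responder accepts
        return PIE - offer, offer
    return 0, 0                     # rejection: both get nothing

def is_nash(offer, threshold):
    p, r = payoffs(offer, threshold)
    # No unilateral proposer deviation may raise the proposer's payoff...
    if any(payoffs(o, threshold)[0] > p for o in range(PIE + 1)):
        return False
    # ...and no unilateral responder deviation may raise the responder's payoff.
    if any(payoffs(offer, t)[1] > r for t in range(PIE + 1)):
        return False
    return True

# "Unfair" equilibrium: minimal offer, responder accepts everything.
print(is_nash(offer=0, threshold=0))   # True
# "Fair" equilibrium: even split, responder rejects anything less.
print(is_nash(offer=5, threshold=5))   # True
# Not an equilibrium: a fair offer met by an accept-everything responder
# (the proposer would gain by offering less).
print(is_nash(offer=5, threshold=0))   # False
```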
The ultimatum game is also often modelled using a continuous strategy set. Suppose the proposer chooses a share S of a pie to offer the receiver, where S can be any real number between 0 and 1, inclusive. If the receiver accepts the offer, the proposer's payoff is (1 − S) and the receiver's is S. If the receiver rejects the offer, both players get zero. The unique subgame perfect equilibrium is (S = 0, Accept). It is weak because the receiver's payoff is 0 whether they accept or reject. No share with S > 0 is subgame perfect, because the proposer could deviate to S′ = S − ε for some small ε > 0, and the receiver's best response would still be to accept. The weak equilibrium is an artifact of the strategy space being continuous.
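The backward-induction logic can be illustrated on a discretized grid. The following sketch is an added illustration under two assumptions made here (the grid step eps, and a receiver who rejects the zero offer, where they are indifferent): the subgame perfect offer on the grid is the smallest positive share, which shrinks toward zero as the grid is refined, mirroring the S′ = S − ε argument in the continuous case.

```python
# Illustrative sketch: backward induction on a discretized ultimatum game.
# A payoff-maximizing receiver accepts any strictly positive share, so the
# proposer's best response is the smallest positive share on the grid.

def subgame_perfect_offer(eps):
    n = int(round(1 / eps))
    shares = [i * eps for i in range(n + 1)]   # S in {0, eps, 2*eps, ..., 1}
    # Sequentially rational receiver: accept whenever S > 0
    # (assume rejection at S = 0, where the receiver is indifferent).
    accepted = [s for s in shares if s > 0]
    # Proposer keeps 1 - S, so the best accepted offer is the smallest one.
    return max(accepted, key=lambda s: 1 - s)

for eps in (0.25, 0.05, 0.01, 0.001):
    print(eps, subgame_perfect_offer(eps))     # offer shrinks toward 0 with the grid
```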
Despite the outcome predicted by Nash equilibrium, numerous experimental studies have shown that people behave quite differently.
Oosterbeek et al. (2004) reviewed 37 studies and found that the first player typically offers around 40% of the "pie," although the percentage tends to decrease with larger pie sizes and when the players lack experience. Similarly, Andersen et al. (2018) observed that the rejection rate of unfair offers declines as the size of the pie being divided increases. Cooper and Dutcher (2011) reviewed a wide range of studies and found that experienced players tend to accept higher offers and reject lower ones.
Bolton and Zwick (1995) conducted a controlled experiment using the ultimatum game, in which they systematically varied both the level of anonymity between players and experimenter and the players' capacity to impose punitive measures. Their findings indicated that increasing anonymity led to a rise in the proportion of outcomes consistent with the Nash equilibrium, from 30% to 46%. More strikingly, when the capacity to punish was eliminated, the rate of Nash-consistent outcomes increased from 30% to nearly 100%. The authors concluded that the ability to punish accounts for deviations from the Nash equilibrium to a greater extent than anonymity.
Similarly, Charness and Gneezy (2008) found that while the disclosure of recipients' surnames significantly increased generosity in the dictator game, it had no such effect in the ultimatum game. They inferred from this result that strategic motives tend to outweigh altruistic considerations in such interactions.
Treating the ultimatum game as a well-formed experiment that approximates general behaviour leads to the conclusion that the rationality assumption is accurate to a degree, but that models of decision making must incorporate additional factors. Behavioral economic and psychological accounts suggest that second players who reject offers of less than 50% of the amount at stake do so for one of two reasons. An altruistic punishment account suggests that rejections occur out of altruism: people reject unfair offers to teach the first player a lesson and thereby reduce the likelihood that the player will make an unfair offer in the future. Thus, rejections are made to benefit the second player, or other people, in the future. By contrast, a self-control account suggests that rejections constitute a failure to inhibit a desire to punish the first player for making an unfair offer. Morewedge, Krishnamurti, and Ariely (2014) found that intoxicated participants were more likely to reject unfair offers than sober participants. As intoxication tends to exacerbate decision makers' prepotent responses, this result supports the self-control account rather than the altruistic punishment account. Other research from social cognitive neuroscience supports this finding.
However, several competing models suggest ways of incorporating the cultural preferences of the players into an optimized utility function in a way that preserves the utility-maximizing agent as a feature of the underlying economic model. For example, researchers have found that proposers tend to offer even splits despite knowing that very unequal splits are almost always accepted. Similar results from other small-scale societies have led some researchers to conclude that "reputation" is seen as more important than any economic reward (a conclusion drawn, for example, from a University of Pennsylvania study of Mongolian and Kazakh participants). Others have proposed that the social status of the responder may be part of the payoff. Another way of reconciling the findings with utility maximization is some form of inequity aversion model (a preference for fairness). Even in anonymous one-shot settings, the outcome suggested by economic theory, a minimal money transfer that is accepted, is rejected by over 80% of players.
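The text above refers only to "some form of inequity aversion model"; one common formalization is a Fehr-Schmidt-style utility, in which a player dislikes both earning less and earning more than the other. The sketch below uses that formalization with purely illustrative parameter values to show how a sufficiently inequity-averse responder rejects low offers even at a monetary cost.

```python
# Illustrative sketch of an inequity aversion (Fehr-Schmidt-style) responder.
# Parameter values are assumptions chosen for illustration only.

def fehr_schmidt_utility(own, other, alpha, beta):
    """Material payoff minus penalties for disadvantageous (alpha) and
    advantageous (beta) inequality."""
    return own - alpha * max(other - own, 0) - beta * max(own - other, 0)

def responder_accepts(offer, pie=1.0, alpha=2.0, beta=0.25):
    # Accept if the inequity-adjusted utility of the split beats rejection (0, 0).
    return fehr_schmidt_utility(offer, pie - offer, alpha, beta) >= 0

for offer in (0.05, 0.2, 0.3, 0.4, 0.5):
    print(offer, responder_accepts(offer))
# With alpha = 2 the responder rejects any offer below 40% of the pie,
# even though rejecting is costly in purely monetary terms.
```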
An explanation that was originally quite popular was the "learning" model, in which it was hypothesized that proposers' offers would decay towards the subgame perfect Nash equilibrium (almost zero) as they mastered the strategy of the game; such decay tends to be seen in other iterated games. However, this explanation (bounded rationality) is less commonly offered now, in light of subsequent empirical evidence.
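A toy simulation, added here purely to illustrate the hypothesized dynamic rather than observed behaviour, shows how offers would drift toward the near-zero subgame perfect offer if responders were pure money-maximizers and the proposer simply probed stingier offers after each acceptance.

```python
# Toy simulation of the hypothesized "learning" decay (illustration only):
# a money-maximizing responder accepts any positive offer, and the proposer
# lowers the offer after each acceptance, raising it after each rejection.

def simulate(rounds=50, pie=100, step=2):
    offer = pie // 2                      # start from an even split
    history = []
    for _ in range(rounds):
        accepted = offer > 0              # money-maximizing responder
        history.append((offer, accepted))
        if accepted:
            offer = max(offer - step, 1)  # probe a slightly stingier offer
        else:
            offer = min(offer + step, pie)
    return history

trace = simulate()
print([o for o, _ in trace[::10]])        # offers decay toward the minimum
```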
It has been hypothesized (e.g. by James Surowiecki) that very unequal allocations are rejected only because the absolute amount of the offer is low. The idea is that if the amount to be split were ten million dollars, a 9:1 split would probably be accepted rather than the responder forgoing a one-million-dollar offer. Essentially, this explanation says that the absolute amount of the endowment is not large enough to produce strategically optimal behaviour. However, many experiments have been performed where the amount offered was substantial: studies by Cameron and Hoffman et al. found that higher stakes cause offers to move closer to an even split, even in a US$100 game played in Indonesia, where average per-capita income is much lower than in the United States. Rejections are reportedly independent of the stakes at this level, with US$30 offers being turned down in Indonesia, as in the United States, even though this equates to two weeks' wages in Indonesia. However, 2011 research with stakes of up to 40 weeks' wages in India showed that "as stakes increase, rejection rates approach zero". It is worth noting that the instructions given to proposers in this study explicitly state, "if the responder's goal is to earn as much money as possible from the experiment, they should accept any offer that gives them positive earnings, no matter how low," thus framing the game in purely monetary terms.
Rejections in the ultimatum game have been shown to be caused by adverse physiologic reactions to stingy offers (Sanfey et al., 2002). In this brain imaging experiment, stingy offers (relative to fair and hyperfair offers) differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral disgust. If Player 1 in the ultimatum game anticipates this response to a stingy offer, they may be more generous.
An increase in rational decisions in the game has been found among experienced Buddhist meditators. fMRI data show that meditators recruit the posterior insular cortex (associated with interoception) during unfair offers and show reduced activity in the anterior insular cortex compared to controls.
People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with normal serotonin levels.
People who have ventromedial frontal cortex lesions were found to be more likely to reject unfair offers. This was suggested to be due to the abstractness and delay of the reward, rather than an increased emotional response to the unfairness of the offer.
The extent to which people are willing to tolerate different distributions of the reward from cooperative ventures results in inequality that is, measurably, exponential across the strata of management within large corporations (see also inequity aversion within companies).
Josh Clark attributes modern interest in the game to Ariel Rubinstein, but the best-known article is the 1982 experimental analysis by Güth, Schmittberger, and Schwarze; the description of the game in Neuroeconomics cites this as the earliest example. Results from testing the ultimatum game challenged the traditional economic principle that consumers are rational and utility-maximising, and sparked a variety of research into the psychology of humans. Since its development, the ultimatum game has become a popular economic experiment, and was said to be "quickly catching up with the Prisoner's Dilemma as a prime showpiece of apparently irrational behavior" in a paper by Martin Nowak, Karen M. Page, and Karl Sigmund.
In the "ultimatum game with tipping", a tip is allowed from responder back to proposer, a feature of the trust game, and net splits tend to be more equitable., p. 247.
The "reverse ultimatum game" gives more power to the responder by giving the proposer the right to offer as many divisions of the endowment as they like. Now the game only ends when the responder accepts an offer or abandons the game, and therefore the proposer tends to receive slightly less than half of the initial endowment.The reverse ultimatum game and the effect of deadlines is from
Incomplete information ultimatum games: Some authors have studied variants of the ultimatum game in which either the proposer or the responder has private information about the size of the pie to be divided. These experiments connect the ultimatum game to principal-agent problems studied in contract theory.
The pirate game is a variant with more than two participants who have voting power, as illustrated in Ian Stewart's "A Puzzle for Pirates".